7 research outputs found

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Full text link
    Recently, Microsoft has released the new Kinect One, providing the next generation of real-time range sensing devices, based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison of both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be adapted and applied to any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists who are interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).
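The Kinect One's continuous-wave ToF principle mentioned above can be illustrated with a minimal sketch: depth follows from the phase shift of an amplitude-modulated light signal, and the modulation frequency sets the unambiguous range. This is a simplified single-frequency model for illustration, not the Kinect One's actual processing pipeline; the function names and the 16 MHz example frequency are assumptions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def amcw_depth(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth from the measured phase shift of the modulated signal."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum depth before the phase wraps (2*pi ambiguity)."""
    return C / (2 * mod_freq_hz)

# Example: at 16 MHz modulation (an illustrative value), the
# unambiguous range is about 9.37 m; a phase shift of pi corresponds
# to half that range.
```

The trade-off this sketch exposes is the one the paper's experiments probe: higher modulation frequencies give finer depth resolution but a shorter wrap-around range.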

    Simultaneous 2D and 3D Video Rendering

    Get PDF
    The representation of stereoscopic video on a display is typically enabled by using active-shutter or polarizing viewing glasses with the television sets and displays available to end users. It is likely that in some usage situations some viewers do not wear viewing glasses at all times, and hence it would be desirable if the stereoscopic video content could be tuned in the rendering device in such a manner that it could be watched simultaneously with and without viewing glasses at an acceptable quality. In this thesis, a novel video rendering technique is proposed and implemented in the post-processing stage which enables good-quality perception of the same content both as stereoscopic and as traditional 2D video. This is accomplished by manipulating one view in the stereoscopic video, making it more similar to the other view in order to reduce the ghosting artifact perceived when the content is watched without viewing glasses, while stereoscopic perception is maintained. The proposed technique includes three steps: disparity selection, contrast adjustment, and low-pass filtering. Through an extensive series of subjective tests, the proposed approach has been shown to allow stereoscopic content to be viewed without glasses at an acceptable quality. The proposed methods also resulted in a lower-bitrate stereoscopic video stream, requiring less bandwidth for broadcasting.

    Pulse Based Time-of-Flight Range Sensing

    No full text
    Pulse-based Time-of-Flight (PB-ToF) cameras are an attractive alternative to the widely commercialized Amplitude-Modulated Continuous-Wave Time-of-Flight (AMCW-ToF) approach to range imaging. This paper presents an in-depth evaluation of a PB-ToF camera prototype based on the Hamamatsu area sensor S11963-01CR. We evaluate different ToF-related effects, i.e., temperature drift, systematic error, depth inhomogeneity, multi-path effects, and motion artefacts. Furthermore, we evaluate the systematic error of the system in more detail and introduce novel concepts to improve the quality of range measurements by modifying the mode of operation of the PB-ToF camera. Finally, we describe the means of measuring the gate response of the PB-ToF sensor and using this information for PB-ToF sensor simulation.
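The pulse-based principle evaluated in this paper can be sketched with the classic two-gate estimator: depth follows from the ratio of charges integrated in two consecutive gates, each as wide as the emitted light pulse. This is an illustration of the general PB-ToF principle under idealized assumptions (rectangular pulse, no ambient light), not the S11963-01CR's actual readout.

```python
C = 299_792_458.0  # speed of light, m/s

def pb_tof_depth(q1: float, q2: float, pulse_width_s: float) -> float:
    """Two-gate PB-ToF depth estimate.

    q1: charge integrated while the light pulse is being emitted,
    q2: charge integrated in the gate immediately afterwards.
    The returning pulse straddles the gate boundary, so the fraction
    falling into the second gate encodes the round-trip delay.
    """
    if q1 + q2 == 0:
        raise ValueError("no returned signal")
    return 0.5 * C * pulse_width_s * q2 / (q1 + q2)

# Example: equal charges in both gates with a 30 ns pulse put the
# target halfway through the measurable range, at about 2.25 m.
```

The ratio makes the estimate largely independent of target reflectivity, which is why gate response (measured in the paper's final section) matters: any deviation from ideal rectangular gates distorts the charge ratio and shows up as systematic error.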

    Simultaneous 2D and 3D perception for stereoscopic displays based on polarized or active shutter glasses

    Get PDF
    Viewing stereoscopic 3D content is typically enabled by using polarizing or active-shutter glasses. In certain cases, some viewers may not wear viewing glasses, and hence it would be desirable to tune the stereoscopic 3D content so that it could be watched simultaneously with and without viewing glasses. In this paper we propose a video post-processing technique which enables good-quality 3D and 2D perception of the same content. This is done by manipulating one view, making it more similar to the other view, to reduce the ghosting artifact perceived without viewing glasses while 3D perception is maintained. The proposed technique includes three steps: disparity selection, contrast adjustment, and low-pass filtering. The proposed approach was evaluated through an extensive series of subjective tests, which also revealed good adjustment parameters for viewing with and without viewing glasses at an acceptable 3D and 2D quality, respectively. Peer reviewed.
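The three-step manipulation named in the abstract can be sketched as a small NumPy pipeline applied to one view of the stereo pair. The parameter names, the horizontal shift as a stand-in for disparity selection, and the separable box filter for the low-pass step are all illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def manipulate_view(view: np.ndarray, disparity_px: int,
                    contrast_gain: float, kernel: int) -> np.ndarray:
    """Reduce ghosting in one view of a stereo pair (values in [0, 1])."""
    # 1. Disparity selection: shift the view horizontally so its dominant
    #    disparity relative to the other view is reduced.
    shifted = np.roll(view, disparity_px, axis=1)
    # 2. Contrast adjustment: compress contrast around the mean
    #    (gain < 1 pulls the view toward a flat image).
    mean = shifted.mean()
    adjusted = mean + contrast_gain * (shifted - mean)
    # 3. Low-pass filtering: a horizontal box blur softens the edges that
    #    would otherwise double up as ghosting when viewed without glasses.
    k = np.ones(kernel) / kernel
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, adjusted)
    return np.clip(blurred, 0.0, 1.0)
```

Each step trades 3D fidelity for 2D comfort, which is why the subjective tests are needed to locate acceptable parameter values rather than deriving them analytically.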

    Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction

    No full text

    RGB-D Sensors Data Quality Assessment and Improvement for Advanced Applications

    No full text
    Since the advent of the first Kinect as a motion-control device for the Microsoft XBOX platform (November 2010), several similar active and low-cost range sensing devices, capable of capturing a digital RGB image and the corresponding depth map (RGBD), have been introduced in the market. Although initially designed for the video-gaming market with the aim of capturing an approximate 3D image of a human body in order to create gesture-based interfaces, RGBD sensors’ low cost and their ability to gather streams of 3D data in real time at frame rates of 15 to 30 fps boosted their popularity for several other purposes, including 3D multimedia interaction, robot navigation, 3D body scanning for garment design, and proximity sensing for automotive applications. However, data quality is not the RGBD sensors’ strong point, and additional considerations are needed to maximize the amount of information that can be extracted from the raw data, together with proper criteria for data validation and verification. The present chapter provides an overview of RGBD sensor technology and an analysis of how random and systematic 3D measurement errors affect the global 3D data quality in the various technological implementations. Typical applications are also reported, with the aim of providing readers with the basic knowledge and understanding of the potentialities and challenges of this technology.
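One concrete instance of the random-error behaviour this chapter analyzes: for triangulation-based RGBD sensors such as the first Kinect, the random depth error is commonly modelled as growing with the square of the distance. The quadratic form is well established; the coefficient below is an illustrative assumption, not a calibrated value for any specific device.

```python
def depth_noise_std(z_m: float, coeff: float = 1.4e-3) -> float:
    """Approximate 1-sigma random depth error (m) at range z (m).

    Quadratic model for triangulation-based RGBD sensors; the default
    coefficient is a rough, assumed ballpark (millimetre-level error
    near 1 m, centimetre-level beyond 3 m), not a device calibration.
    """
    return coeff * z_m * z_m
```

A consequence of the quadratic growth is that doubling the working distance quadruples the depth noise, which is why applications like body scanning keep the subject close to the sensor.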